101 research outputs found
Regularized Evolutionary Algorithm for Dynamic Neural Topology Search
Designing neural networks for object recognition requires considerable
architecture engineering. As a remedy, neuro-evolutionary network architecture
search, which automatically searches for optimal network architectures using
evolutionary algorithms, has recently become very popular. Although very
effective, evolutionary algorithms rely heavily on having a large population of
individuals (i.e., network architectures) and are therefore memory-expensive. In
this work, we propose a Regularized Evolutionary Algorithm with low memory
footprint to evolve a dynamic image classifier. In detail, we introduce novel
custom operators that regularize the evolutionary process of a micro-population
of 10 individuals. We conduct experiments on three different digits datasets
(MNIST, USPS, SVHN) and show that our evolutionary method obtains results
competitive with the current state of the art.
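The aging-based regularization described above, which removes the oldest rather than the worst individual, can be sketched generically. The following is a minimal, hypothetical illustration of regularized (aging) evolution on a micro-population; `evaluate`, `random_arch`, and `mutate` are placeholder hooks, not the paper's custom operators:

```python
import random

def regularized_evolution(evaluate, random_arch, mutate,
                          pop_size=10, cycles=50, seed=0):
    """Aging-evolution sketch: each cycle, a parent is picked by a small
    tournament, a mutated child is added, and the OLDEST individual is
    removed. Age-based removal regularizes the search toward mutations
    that stay good when re-applied."""
    rng = random.Random(seed)
    population = []  # FIFO queue of (architecture, fitness)
    for _ in range(pop_size):
        arch = random_arch(rng)
        population.append((arch, evaluate(arch)))
    best = max(population, key=lambda p: p[1])
    for _ in range(cycles):
        # Tournament selection over a small random sample.
        sample = rng.sample(population, k=3)
        parent = max(sample, key=lambda p: p[1])
        child = mutate(parent[0], rng)
        entry = (child, evaluate(child))
        population.append(entry)
        population.pop(0)  # age-based removal (the regularization)
        best = max(best, entry, key=lambda p: p[1])
    return best
```

With a micro-population of 10, the FIFO removal is what keeps the small population from collapsing onto a single stale elite.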
Self-building Neural Networks
During the first part of life, the brain develops while it learns through a
process called synaptogenesis: neurons, growing and interacting with each
other, create synapses; eventually, however, the brain prunes many of them.
While previous work focused on learning and pruning independently, in this work
we propose a biologically plausible model that, thanks to a combination of
Hebbian learning and pruning, aims to simulate the synaptogenesis process. In
this way, while learning how to solve the task, the agent translates its
experience into a particular network structure. Namely, the network structure
builds itself during the execution of the task. We call this approach
Self-building Neural Network (SBNN). We compare our proposed SBNN with
traditional neural networks (NNs) over three classical control tasks from
OpenAI. The results show that our model performs generally better than
traditional NNs. Moreover, we observe that the performance decay with
increasing pruning rate is smaller in our model than in traditional NNs. Finally, we
perform a validation test, evaluating the models on tasks unseen during the
learning phase. In this case, the results show that SBNNs can adapt to new
tasks better than the traditional NNs, especially when a large fraction of the
weights is pruned.
Comment: To appear in the Genetic and Evolutionary Computation Conference
Companion (GECCO '23 Companion) Proceedings, July 15--19, 2023, Lisbon,
Portugal.
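As a minimal illustration of the two ingredients named above, a Hebbian weight update combined with magnitude-based pruning might look as follows. This is a NumPy sketch under simplifying assumptions; the update rule and pruning criterion here are generic stand-ins, not the SBNN model itself:

```python
import numpy as np

def hebbian_step(W, pre, post, eta=0.1):
    """Hebbian update: strengthen synapses whose pre- and post-synaptic
    neurons are co-active (delta_W = eta * outer(post, pre))."""
    return W + eta * np.outer(post, pre)

def prune(W, rate):
    """Zero out the weakest `rate` fraction of synapses by magnitude,
    a crude stand-in for the pruning phase of synaptogenesis."""
    k = int(W.size * rate)
    if k == 0:
        return W.copy()
    threshold = np.sort(np.abs(W), axis=None)[k - 1]
    return np.where(np.abs(W) <= threshold, 0.0, W)
```

Alternating these two steps while solving a task is the sense in which a network structure "builds itself": connectivity emerges from co-activity and survives only if it stays strong.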
Evaluating MAP-Elites on Constrained Optimization Problems
Constrained optimization problems are often characterized by multiple
constraints that, in practice, must be satisfied with different tolerance
levels. While some constraints are hard and as such must be satisfied with
zero tolerance, others may be soft, such that non-zero violations are
acceptable. Here, we evaluate the applicability of MAP-Elites to "illuminate"
constrained search spaces by mapping them into feature spaces where each
feature corresponds to a different constraint. On the one hand, MAP-Elites
implicitly preserves diversity, thus allowing a good exploration of the search
space. On the other hand, it provides an effective visualization that
facilitates a better understanding of how constraint violations correlate with
the objective function. We demonstrate the feasibility of this approach on a
large set of benchmark problems, in various dimensionalities, and with
different algorithmic configurations. As expected, numerical results show that
a basic version of MAP-Elites cannot compete on all problems (especially those
with equality constraints) with state-of-the-art algorithms that use gradient
information or advanced constraint handling techniques. Nevertheless, it has a
higher potential for discovering trade-offs between constraint violations and
the objective function, and for providing new insight into the problem. As such,
it could be used in the future as an effective building block for designing new
constrained optimization algorithms.
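A bare-bones sketch of this idea, mapping solutions into archive cells indexed by binned constraint violations, could look like the following. The interfaces are hypothetical (`objective`, `constraints`, `sample`, and `mutate` are assumed problem hooks), and constraint violations are assumed non-negative:

```python
import random

def map_elites(objective, constraints, sample, mutate,
               bins=5, max_violation=1.0, iterations=200, seed=0):
    """MAP-Elites sketch: the archive is indexed by the discretized
    violation of each constraint, so each cell keeps the best solution
    found for that particular violation profile."""
    rng = random.Random(seed)
    archive = {}  # cell index tuple -> (solution, objective value)

    def cell(x):
        # One feature dimension per constraint: binned violation level.
        return tuple(min(int(c(x) / max_violation * bins), bins - 1)
                     for c in constraints)

    def insert(x):
        key, f = cell(x), objective(x)
        if key not in archive or f < archive[key][1]:  # minimization
            archive[key] = (x, f)

    for _ in range(20):          # random initialization
        insert(sample(rng))
    for _ in range(iterations):  # mutate random elites
        parent = rng.choice(list(archive.values()))[0]
        insert(mutate(parent, rng))
    return archive
```

Reading the archive row by row then directly visualizes how the best attainable objective changes as each constraint's violation is allowed to grow.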
Improved search methods for assessing Delay-Tolerant Networks vulnerability to colluding strong heterogeneous attacks
Increasingly, digital communication is routed among wireless, mobile computers over ad-hoc, unsecured communication channels. In this paper, we design two stochastic search algorithms (a greedy heuristic, and an evolutionary algorithm) which automatically search for strong insider attack methods against a given ad-hoc, delay-tolerant communication protocol, and thus expose its weaknesses. To assess their performance, we apply the two algorithms to two simulated, large-scale mobile scenarios (of different route morphology) with 200 nodes having free range of movement. We investigate a choice of two standard attack strategies (dropping messages and flooding the network), and four delay-tolerant routing protocols: First Contact, Epidemic, Spray and Wait, and MaxProp. We find dramatic drops in performance: replicative protocols (Epidemic, Spray and Wait, MaxProp), formerly deemed resilient, are compromised to different degrees (delivery rates between 24% and 87%), while a forwarding protocol (First Contact) is shown to drop delivery rates to under 5%, in all cases by well-crafted attack strategies and with an attacker group of size less than 10% of the total network size. Overall, we show that the two proposed methods combined constitute an effective means to discover (at design time) and raise awareness about the weaknesses and strengths of existing ad-hoc, delay-tolerant communication protocols against potential malicious cyber-attacks.
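The greedy variant of such a search can be sketched in a few lines. Here `delivery_rate` is a hypothetical callable standing in for the network simulation (the expensive fitness evaluation), and the attack strategy itself is abstracted away:

```python
def greedy_attack(nodes, delivery_rate, budget):
    """Greedy heuristic sketch: iteratively convert to an insider
    attacker the node whose addition lowers the simulated delivery
    rate the most, until the attacker budget is spent."""
    attackers = set()
    for _ in range(budget):
        candidates = [n for n in nodes if n not in attackers]
        if not candidates:
            break
        best = min(candidates,
                   key=lambda n: delivery_rate(attackers | {n}))
        attackers.add(best)
    return attackers
```

An evolutionary counterpart would instead maintain a population of candidate attacker sets and recombine them, trading more simulation calls for a search less prone to greedy local optima.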
Learning with Delayed Synaptic Plasticity
The plasticity property of biological neural networks allows them to perform
learning and optimize their behavior by changing their configuration. Inspired
by biology, plasticity can be modeled in artificial neural networks by using
Hebbian learning rules, i.e. rules that update synapses based on the neuron
activations and reinforcement signals. However, the distal reward problem
arises when reinforcement signals are not available immediately after each
network output, making it difficult to associate the reinforcement signal with
the neuron activations that contributed to receiving it. In this work, we extend Hebbian plasticity
rules to allow learning in distal reward cases. We propose the use of neuron
activation traces (NATs) to provide additional data storage in each synapse to
keep track of the activation of the neurons. Delayed reinforcement signals are
provided after each episode relative to the networks' performance during the
previous episode. We employ genetic algorithms to evolve delayed synaptic
plasticity (DSP) rules and perform synaptic updates based on NATs and delayed
reinforcement signals. We compare DSP with an analogous hill climbing (HC)
algorithm that does not incorporate the domain knowledge introduced with the
NATs, and show that the synaptic updates performed by the DSP rules yield more
effective training than the HC algorithm.
Comment: GECCO201
Limited Evaluation Cooperative Co-evolutionary Differential Evolution for Large-scale Neuroevolution
Many real-world control and classification tasks involve a large number of
features. When artificial neural networks (ANNs) are used for modeling these
tasks, the network architectures tend to be large. Neuroevolution is an
effective approach for optimizing ANNs; however, there are two bottlenecks that
make their application challenging in the case of high-dimensional networks using
direct encoding. First, classic evolutionary algorithms tend not to scale well
for searching large parameter spaces; second, the network evaluation over a
large number of training instances is in general time-consuming. In this work,
we propose an approach called the Limited Evaluation Cooperative
Co-evolutionary Differential Evolution algorithm (LECCDE) to optimize
high-dimensional ANNs.
The proposed method aims to optimize the pre-synaptic weights of each
post-synaptic neuron in different subpopulations using a Cooperative
Co-evolutionary Differential Evolution algorithm, and employs a limited
evaluation scheme where fitness evaluation is performed on a relatively small
number of training instances based on fitness inheritance. We test LECCDE on
three datasets of various sizes, and our results show that cooperative
co-evolution significantly reduces the test error compared to standard
Differential Evolution, while the limited evaluation scheme facilitates a
significant reduction in computing time.
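The decomposition-plus-DE loop can be sketched on a toy function. This is an illustrative cooperative co-evolutionary DE only (DE/rand/1 mutation, no crossover and no fitness inheritance), with fixed-size variable blocks standing in for one post-synaptic neuron's incoming weights:

```python
import random

def ccde(f, dim, group=2, pop=8, generations=40, F=0.5, seed=0):
    """Cooperative co-evolutionary DE sketch: the decision vector is
    split into blocks, each block evolves its own subpopulation, and a
    candidate is scored by splicing it into a shared context vector."""
    rng = random.Random(seed)
    blocks = [list(range(i, min(i + group, dim)))
              for i in range(0, dim, group)]
    subpops = [[[rng.uniform(-1, 1) for _ in blk] for _ in range(pop)]
               for blk in blocks]
    context = [rng.uniform(-1, 1) for _ in range(dim)]

    def score(blk, cand):
        x = context[:]
        for j, i in enumerate(blk):
            x[i] = cand[j]
        return f(x)

    for _ in range(generations):
        for blk, sp in zip(blocks, subpops):
            for k in range(pop):
                a, b, c = rng.sample([m for j, m in enumerate(sp) if j != k], 3)
                # DE/rand/1 mutation (crossover omitted for brevity).
                trial = [ai + F * (bi - ci) for ai, bi, ci in zip(a, b, c)]
                if score(blk, trial) < score(blk, sp[k]):
                    sp[k] = trial
            # Write this block's best member back into the context vector.
            best = min(sp, key=lambda m: score(blk, m))
            for j, i in enumerate(blk):
                context[i] = best[j]
    return context, f(context)
```

In LECCDE the blocks would be each neuron's pre-synaptic weights and `f` the (expensive) network evaluation, which is where the limited-evaluation scheme with fitness inheritance cuts the cost of the many `score` calls.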